Transport-based metrics and related embeddings (transforms) have recently been used to model signal classes in which nonlinear structure or variation is present. In this paper, we study the geometry of time series data under a generalized Wasserstein metric, together with the geometry associated with their signed cumulative distribution transforms in the embedding space. Moreover, we show how understanding these geometric characteristics can lend interpretability to certain time series classifiers and serve as inspiration for more robust classifiers.
This paper presents a new end-to-end signal classification method using the signed cumulative distribution transform (SCDT). We adopt a transport-based generative model to define the classification problem. We then exploit mathematical properties of the SCDT to render the problem easier to solve in the transform domain, and determine the class of an unknown sample using a nearest local subspace (NLS) search algorithm in the SCDT domain. Experiments show that the proposed method provides high-accuracy classification results while being data efficient, robust to out-of-distribution samples, and competitive in computational complexity with deep-learning end-to-end classification methods. A Python implementation is integrated as part of the software package PyTransKit (https://github.com/rohdelab/pytranskit).
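The transform underlying this line of work can be sketched in a few lines. The toy below (an assumption-laden illustration, not the PyTransKit API) computes the CDT of a positive signal against a reference density with NumPy and checks the property that transport-based classifiers exploit: a translation of the signal becomes a constant shift of its transform. The SCDT extends this to signed signals via a decomposition into positive and negative parts, which is omitted here.

```python
import numpy as np

def cdt(signal, reference, x):
    """Cumulative distribution transform (CDT) of a positive signal with
    respect to a reference density, both sampled on the grid x.
    (Sketch for the positive case; the SCDT handles signed signals.)"""
    dx = x[1] - x[0]
    Fs = np.cumsum(signal / (signal.sum() * dx)) * dx     # CDF of the signal
    Fr = np.cumsum(reference / (reference.sum() * dx)) * dx  # CDF of the reference
    return np.interp(Fr, Fs, x)   # evaluates Fs^{-1} composed with Fr

x = np.linspace(-5.0, 5.0, 1000)
gauss = lambda mu: np.exp(-(x - mu) ** 2 / 2.0)
# Key property: translating the signal by mu shifts its transform by mu,
# turning a nonlinear deformation into a linear one in transform space.
shift = cdt(gauss(1.0), gauss(0.0), x) - cdt(gauss(0.0), gauss(0.0), x)
```

Away from the grid boundaries, `shift` is approximately the constant 1.0, matching the translation applied to the signal.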
Deep convolutional neural networks (CNNs) are broadly considered to be state-of-the-art generic end-to-end image classification systems. However, they are known to underperform when training data are limited, requiring data augmentation strategies that render the method computationally expensive and not always effective. Rather than using data augmentation strategies to encode invariances as is typically done in machine learning, here we propose to mathematically augment a nearest subspace classification model in sliced-Wasserstein space by exploiting certain mathematical properties of the Radon cumulative distribution transform (R-CDT), a recently introduced image transform. We demonstrate that for a particular type of learning problem, our mathematical solution has advantages over data augmentation with deep CNNs in terms of classification accuracy and computational complexity, and is particularly effective under limited training data settings. The method is simple, effective, computationally efficient, non-iterative, and requires no parameters to be tuned. Python code implementing our method is available at https://github.com/rohdelab/mathemation_augmentation. Our method is integrated as part of the software package PyTransKit, available at https://github.com/rohdelab/pytranskit.
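The nearest subspace classifier at the heart of this approach is compact enough to sketch. The toy below (illustrative only; the class labels, four-dimensional samples, and two-component truncation are assumptions, and the real method operates on R-CDT transforms of images) builds an orthonormal basis per class via SVD and assigns a test sample to the class whose subspace reconstructs it with the smallest residual.

```python
import numpy as np

def fit_subspaces(class_samples, k=2):
    """For each class, build an orthonormal basis (via SVD) of the span of
    its transform-domain training samples, truncated to k components."""
    bases = {}
    for label, X in class_samples.items():       # X: (n_samples, dim)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[label] = Vt[:k]                    # (k, dim), orthonormal rows
    return bases

def classify(sample, bases):
    """Assign the class whose subspace gives the smallest residual."""
    def residual(B):
        proj = B.T @ (B @ sample)                # projection onto the subspace
        return np.linalg.norm(sample - proj)
    return min(bases, key=lambda lbl: residual(bases[lbl]))

# Toy classes spanning disjoint coordinate planes (assumed data).
train = {
    "A": np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]]),
    "B": np.array([[0, 0, 1.0, 0], [0, 0, 0, 1], [0, 0, 1, 1]]),
}
bases = fit_subspaces(train)
```

Because the classifier is a closed-form projection, it is non-iterative and has no parameters to tune, which is the computational advantage the abstract claims.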
We study a new approach to learning energy-based models (EBMs) based on adversarial training (AT). We show that (binary) AT learns a special kind of energy function that models the support of the data distribution, and that the learning process is closely related to MCMC-based maximum likelihood learning of EBMs. We further propose improved techniques for generative modeling with AT, and demonstrate that this new approach is capable of generating diverse and realistic images. Aside from having competitive image generation performance relative to explicit EBMs, the studied approach is stable to train, well-suited for image translation tasks, and exhibits strong out-of-distribution adversarial robustness. Our results demonstrate the viability of the AT approach to generative modeling, suggesting that AT is a competitive alternative for learning EBMs.
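The core idea, that a binary classifier trained against perturbed contrastive points yields an energy function whose low values trace the data support, can be illustrated on a toy scale. The sketch below is not the paper's method (which uses deep networks and stronger attacks); it is a minimal stand-in using a linear logistic model, a single FGSM-style step on the contrastive points, and synthetic 2-D data, all of which are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear "energy" E(x) = -(w.x + b): low energy should mark the data support.
w, b = np.zeros(2), 0.0
data = rng.normal([2.0, 2.0], 0.3, size=(256, 2))   # "real" cluster
noise = rng.normal(0.0, 1.0, size=(256, 2))         # contrastive points
eps, lr = 0.3, 0.1
for _ in range(200):
    # Inner step: move contrastive points toward higher classifier score
    # (lower energy); the gradient of w.x + b w.r.t. x is just w.
    adv = noise + eps * np.sign(w)
    X = np.vstack([data, adv])
    y = np.hstack([np.ones(len(data)), np.zeros(len(adv))])
    p = sigmoid(X @ w + b)
    g = X.T @ (p - y) / len(y)                      # logistic-loss gradient
    w -= lr * g
    b -= lr * float((p - y).mean())

energy = lambda x: -(x @ w + b)
```

After training, points near the data cluster receive lower energy than points far from it, which is the support-modeling behavior the abstract describes.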
The vulnerability of deep neural networks to adversarial examples has become a significant concern for deploying these models in sensitive domains. Devising a principled defense against such attacks has proven challenging, and methods relying on detecting adversarial samples are only effective when the attacker is oblivious to the detection mechanism. In this paper, we propose a principled adversarial example detection method that can withstand norm-constrained white-box attacks. For a K-class classification problem, we train K binary classifiers, where the i-th binary classifier is used to distinguish between clean data of class i and adversarially perturbed samples of other classes. At test time, we first use a trained classifier to obtain the predicted label (say k) of the input, and then use the k-th binary classifier to determine whether the input is a clean sample (of class k) or an adversarially perturbed example (of other classes). We further devise a generative approach to detecting/classifying adversarial examples by interpreting each binary classifier as an unnormalized density model of the class-conditional data. We provide a comprehensive evaluation of the above adversarial example detection/classification methods and demonstrate their competitive performance and compelling properties.
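The two-stage test-time procedure described above can be expressed directly in code. In the sketch below, the multiclass predictor and the per-class binary classifiers are toy stand-ins (a nearest-center rule and distance thresholds on 1-D inputs, both assumptions for illustration); only the control flow mirrors the described method.

```python
import numpy as np

def detect(x, multiclass_predict, binary_classifiers):
    """Stage 1: get the predicted label k from the K-way classifier.
    Stage 2: ask the k-th binary classifier whether x looks like clean
    class-k data (True) or an adversarially perturbed sample (False)."""
    k = multiclass_predict(x)
    is_clean = binary_classifiers[k](x)
    return k, is_clean

# Toy stand-ins: class 0 lives near -1, class 1 near +1, and each binary
# "clean" check is a distance threshold around the class center.
centers = np.array([-1.0, 1.0])
multiclass = lambda x: int(np.argmin(np.abs(centers - x)))
binaries = {k: (lambda x, c=c: abs(x - c) < 0.5) for k, c in enumerate(centers)}
```

A point near a class center is accepted as clean, while a point predicted as class k but far from class-k data is flagged, which is exactly the role the K binary classifiers play.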
We describe a method for signal parameter estimation using the signed cumulative distribution transform (SCDT), a recently introduced signal representation tool based on optimal transport theory. The method builds upon signal estimation using the cumulative distribution transform (CDT), originally introduced for positive distributions. Specifically, we show that parameter estimation for arbitrary signal classes can be performed simply by using linear least squares techniques in SCDT space, thereby providing a global minimizer for the estimation problem even when the underlying signal is a nonlinear function of the unknown parameters. Comparisons with current signal estimation methods using $L_p$ minimization demonstrate the advantages of the approach.
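To see how a nonlinear estimation problem becomes linear in transform space, consider estimating an unknown dilation. The sketch below is an illustration under stated assumptions (positive Gaussian-shaped signals, a known template, plain NumPy), not the paper's full method: in CDT space a dilation s(x) → a·s(a·x) scales the transform by 1/a, so the unknown a is recovered by ordinary linear least squares.

```python
import numpy as np

def transform(sig, ref, x):
    """CDT of a positive signal w.r.t. a reference, both sampled on x."""
    dx = x[1] - x[0]
    Fs = np.cumsum(sig / (sig.sum() * dx)) * dx   # CDF of the signal
    Fr = np.cumsum(ref / (ref.sum() * dx)) * dx   # CDF of the reference
    return np.interp(Fr, Fs, x)                   # Fs^{-1} composed with Fr

x = np.linspace(-8.0, 8.0, 2000)
template = np.exp(-x ** 2)
observed = np.exp(-(2.0 * x) ** 2)                # template dilated by a = 2

t_hat = transform(template, template, x)          # identity: t_hat ~ x
o_hat = transform(observed, template, x)          # ~ x / a
core = slice(500, 1500)                           # avoid boundary interpolation
c, *_ = np.linalg.lstsq(t_hat[core, None], o_hat[core], rcond=None)
a_hat = 1.0 / c[0]
```

The least-squares fit is global and closed-form even though the observed signal is a nonlinear function of the parameter a in the original domain.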
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency goals, and it has been recently used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of GI-based allocation designs to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
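The shape of an index-based adaptive design can be sketched as a simulation loop. Computing the actual Gittins index for exponential rewards is the paper's contribution and is not reproduced here; the plug-in posterior-mean index below is a simplified stand-in (as are the prior, rates, and horizon) meant only to show how allocation, observation, and conjugate updating interleave.

```python
import random

def run_experiment(true_rates, horizon, seed=0):
    """Simulate an index-based adaptive allocation with exponential rewards."""
    rng = random.Random(seed)
    k_arms = len(true_rates)
    a = [1.0] * k_arms   # Gamma(a, b) conjugate prior on each exponential rate
    b = [1.0] * k_arms
    counts = [0] * k_arms
    for t in range(horizon):
        if t < 5 * k_arms:                 # forced initial exploration
            arm = t % k_arms
        else:
            # Plug-in index: posterior mean of the rate is a/b, so b/a
            # estimates the arm's mean reward; allocate to the largest.
            arm = max(range(k_arms), key=lambda i: b[i] / a[i])
        reward = rng.expovariate(true_rates[arm])
        a[arm] += 1.0                      # conjugate posterior update
        b[arm] += reward
        counts[arm] += 1
    return counts

counts = run_experiment([1.0, 0.2], horizon=500)
```

A GI rule would replace the plug-in index with one that correctly prices the value of continued exploration, which is what allows it to match non-adaptive designs in power while earning more.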
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
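The matching formulation at the core of this design is easy to sketch. In the toy below, both encoders are fixed embeddings rather than neural networks, and the class names and vectors are invented for illustration; what carries over is the mechanism: class-name embeddings from the label encoder anchor a shared latent space, and classification is a nearest-match between data and class representations.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in label-encoder output: one embedding per class name, shared by
# all clients as the common ground for the latent space.
class_embeddings = normalize(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
class_names = ["cat", "dog", "bird"]

def predict(data_embedding):
    """Match a data representation to the nearest class representation
    by cosine similarity."""
    sims = class_embeddings @ normalize(data_embedding)
    return class_names[int(np.argmax(sims))]
```

Because every client scores its data against the same class embeddings, clients holding disjoint class sets still train toward one consistent latent space, which is the drift problem the paper targets.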
Learning with noisy labels has become an important research topic in computer vision, where state-of-the-art (SOTA) methods explore: 1) prediction disagreement with a co-teaching strategy that updates two models when they disagree on the prediction of training samples; and 2) sample selection that divides the training set into clean and noisy sets based on small training loss. However, the quick convergence of co-teaching models to select the same clean subsets, combined with relatively fast overfitting of noisy labels, may induce the wrong selection of noisy-label samples as clean, leading to an inevitable confirmation bias that damages accuracy. In this paper, we introduce our noisy-label learning approach, called Asymmetric Co-teaching (AsyCo), which introduces a novel prediction disagreement that produces more consistently divergent results from the co-teaching models, and a new sample selection approach that does not require the small-loss assumption, enabling better robustness to confirmation bias than previous methods. More specifically, the new prediction disagreement is achieved by using different training strategies, where one model is trained with multi-class learning and the other with multi-label learning. Also, the new sample selection is based on multi-view consensus, which uses the label views from training labels and model predictions to divide the training set into clean and noisy subsets for training the multi-class model, and to re-label the training samples with multiple top-ranked labels for training the multi-label model. Extensive experiments on synthetic and real-world noisy-label datasets show that AsyCo improves over current SOTA methods.
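A simplified reading of the multi-view consensus rule can be put in code. The sketch below is an assumption-laden reduction of the described selection (it keeps only the agreement test between the given label and the two models' predictions, and omits AsyCo's re-labelling and training steps):

```python
def split_by_consensus(labels, preds_a, preds_b):
    """Keep a sample as 'clean' when its given label agrees with the
    predictions of both co-trained models; otherwise send it to the
    noisy set (in AsyCo, noisy samples are then re-labelled with the
    models' top-ranked classes for the multi-label model)."""
    clean, noisy = [], []
    for i, (y, pa, pb) in enumerate(zip(labels, preds_a, preds_b)):
        (clean if y == pa == pb else noisy).append(i)
    return clean, noisy
```

Note how this rule never consults the training loss, which is how the method sidesteps the small-loss assumption of earlier sample-selection approaches.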